Survey polled 4,000 U.S. residents in late 2025:
This survey appears to highlight the growing opposition to data center construction in America. Around half of all previously announced data center projects have been delayed or cancelled entirely. Often this was due to financial or component-supply issues, such as Chinese power transformer shortages, but growing opposition from local lawmakers and communities over the projects' impact on water quality, air quality, and electricity prices has also been a factor.
In total, 47% of respondents said they opposed the construction of new AI data centers in their neighborhood, with just 38% saying they supported it. That support was spread differently throughout various age ranges, however.
Of those questioned, 50% of Millennial age respondents said they either somewhat or strongly supported the creation of new AI data centers in their neighborhood or local area. This was closely followed by 48% of Gen Z respondents. There was a large drop off after that, with only 38% of Gen X saying they supported their creation. Baby boomers were the least enthusiastic, with just 22% claiming they felt the same.
In what is perhaps an example of the current U.S. administration’s influence and its entanglement with top tech firms, 49% of surveyed Republicans claimed they would support new data center creation in their local area. This stood in stark contrast to just 36% of Democrats. A contributing factor may be that Republican-voting states and counties tend to be more rural, with less economic activity; data center projects require construction, and there is the potential for local job creation.
When it came to homeowners and renters, surprisingly, it was the homeowners who were more likely to support it, with 39% versus 36% of renters claiming they either somewhat or strongly supported new data center development in their neighborhood.
City councils that back data center projects are being voted out, other city councils are putting moratoriums on data center construction, and instances of more extreme violence towards AI companies and their employees are becoming more common.
Although this latest survey does show that there is some support for data center creation, the fact that the opposition is in the majority suggests that any data center projects that haven’t already been delayed or cancelled are likely to face increasingly fierce pushback that could derail their eventual development entirely.
And considering this survey was conducted in November 2025, there has been further evidence of pushback against hyperscalers in the months since.
That’s a core component of many people’s misgivings about AI in general. Executives aren’t seeing increased returns because of it, and companies are finding it hasn’t boosted productivity much either. It’s also becoming ever more expensive to run. Although there are outliers, the companies that appear to benefit most from AI are the ones developing it, and outside the chipmaking industry itself, even they aren’t making anything close to a profit.
Even the hyperscalers like Oracle, which have received hundreds of billions of dollars worth of compute orders since the large-scale AI buildout began in 2025, are heavily reliant on AI developers like OpenAI paying their bills. Considering OpenAI specifically is struggling to make the kind of money that would allow it to make good on those orders, the list of beneficiaries of new data center developments could be small.
https://scitechdaily.com/after-100-years-scientists-uncover-hidden-rule-governing-cosmic-rays/
More than 100 years after their discovery, cosmic rays continue to puzzle scientists. These extremely energetic particles travel across the universe from distant and powerful sources. The DAMPE (Dark Matter Particle Explorer) space telescope is working to better understand them, including whether dark matter plays a role in how they form.
This international project, which includes the University of Geneva (UNIGE), has now uncovered an important new clue. Researchers have identified a shared feature among these particles, and the findings were published in Nature.
Cosmic rays are the highest-energy particles ever detected, far exceeding anything produced by human-made accelerators on Earth. Their origins remain uncertain, though scientists suspect they are created in extreme environments such as supernova explosions, jets from black holes, or pulsars.
Launched in December 2015, the DAMPE space telescope was designed to investigate these questions. The mission includes major contributions from the astrophysics group at UNIGE’s Department of Nuclear and Particle Physics (DPNC). By analyzing highly precise data, researchers discovered a consistent pattern in the energy distribution of primary cosmic ray nuclei, from protons to iron.
“Cosmic rays are primarily composed of protons, but also of helium, carbon, oxygen, and iron nuclei,” explains Andrii Tykhonov, associate professor at the DPNC in the Faculty of Science at UNIGE, and co-author of the study. “These particles are also categorized according to their energy: low, up to a few billion electron-volts; intermediate, from a few billion to several hundred billion electron-volts; and high, from 1,000 billion electron-volts and beyond.”
The team found that the number of particles drops off more sharply after a certain energy level. This effect, known as “spectral softening,” reflects a steeper decline than the gradual decrease normally seen as energy increases.
This shift occurs at a rigidity of about 15 TV (teravolts, i.e., 15 trillion volts). Rigidity, which is measured in volts, describes how strongly a particle’s path is bent by magnetic fields.
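For reference, magnetic rigidity is the particle's momentum per unit charge (the standard textbook definition, not something specific to this study):

```latex
R = \frac{pc}{Ze}
```

Here $p$ is the particle's momentum, $Z$ its atomic number, $e$ the elementary charge, and $c$ the speed of light. Because $R$ carries units of volts, nuclei with the same rigidity bend identically in a magnetic field regardless of species, which is why a shared spectral feature at a common rigidity points to rigidity-dependent acceleration and transport.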
Finding the same pattern at this rigidity across different types of nuclei supports models where both the acceleration and movement of cosmic rays depend on rigidity. Competing ideas that focus on energy per nucleon (energy divided by the number of nucleons in the particle) are strongly challenged by the data, with a confidence level of 99.999%.
Researchers at UNIGE played a key role in this work. They developed advanced artificial intelligence methods to reconstruct particle events and contributed to precise measurements of proton and helium fluxes, along with carbon analysis. The team also led the development of a major DAMPE instrument, the Silicon-Tungsten Tracker (STK), which allows scientists to accurately trace particle paths and measure their charge.
These findings bring scientists closer to understanding where cosmic rays come from and how they travel through the galaxy. The results place new limits on theories about particle acceleration in extreme astrophysical environments and improve models of how these particles move through interstellar space.
Reference: The DAMPE Collaboration. Charge-dependent spectral softenings of primary cosmic rays below the knee. Nature 653, 52–55 (2026). https://doi.org/10.1038/s41586-026-10472-0
https://www.techspot.com/news/112309-google-chrome-has-silently-pushing-4gb-ai-model.html
Google started turning Chrome, the world's most popular web browser, into an AI browser last year in response to threats from popular AI-native rivals such as OpenAI. Recent reports have uncovered that this transition includes silently installing a large cache of AI weights on an unknown but potentially significant number of devices.
Google Chrome users who have noticed unusual disk activity or unexplained drops in available storage should look for a folder called "OptGuideOnDeviceModel" inside their Chrome directory. It holds roughly 4GB of weights for Google's Gemini Nano LLM, downloaded by the browser without user consent.
Deleting the folder offers no lasting relief – Chrome will simply redownload it. On Windows 11, the folder resides at %LOCALAPPDATA%\Google\Chrome\User Data\OptGuideOnDeviceModel. It has also been confirmed on Apple Silicon and Ubuntu machines.
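For readers who want to check how much space the cache is consuming, a minimal sketch follows. Only the Windows path is confirmed by the article; the Linux and macOS locations are assumptions based on Chrome's usual profile layout.

```python
import os
from pathlib import Path

# Candidate locations of the "OptGuideOnDeviceModel" cache. Only the Windows
# path is confirmed; the Linux and macOS paths are assumptions.
candidates = [
    Path(os.environ.get("LOCALAPPDATA", "")) / "Google/Chrome/User Data/OptGuideOnDeviceModel",
    Path.home() / ".config/google-chrome/OptGuideOnDeviceModel",
    Path.home() / "Library/Application Support/Google/Chrome/OptGuideOnDeviceModel",
]

def folder_size_bytes(root: Path) -> int:
    """Total size of all files under root; 0 if the folder is absent."""
    if not root.is_dir():
        return 0
    return sum(f.stat().st_size for f in root.rglob("*") if f.is_file())

for path in candidates:
    size = folder_size_bytes(path)
    if size:
        print(f"{path}: {size / 1e9:.1f} GB")
```

If the folder exists, a figure around 4 GB would match the reports; deleting it will, per the article, only prompt Chrome to redownload it.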
Uninstalling Chrome entirely is the most effective way to remove the weights. However, those who wish to continue using the browser might be able to disable the download by entering "chrome://flags" into the address bar, finding an item called "Enables optimization guide on device on Android," and selecting "Disabled" from the adjacent dropdown menu. This is also how users can determine whether their device is eligible for the feature.
Firefox has a single kill switch for all AI features: go to Settings > AI Controls and toggle on "Block AI enhancements."
Microsoft has recently come under fire for how its Edge browser handles your saved passwords. A security expert named Tom Jøran Sønstebyseter Rønning has shared a worrying discovery about the Microsoft Edge web browser. It turns out that when you use Edge to save your passwords, the browser decrypts them all into plaintext in memory as soon as the app starts.
For context, plaintext means the passwords are not scrambled or hidden. They sit in the computer's memory as plain words that anyone with administrative privileges or SYSTEM-level access can read.
Rønning shared these findings at a tech event in Oslo called Big Bite of Tech 26. The event was hosted by the research firm Palo Alto Networks Norway. He explained that Edge is the only browser he tested that works this way, whereas other browsers like Google Chrome are safer because they use a method called App-Bound Encryption (ABE).
This feature locks the passwords to the specific browser app and only unscrambles them when you actually need to log in to a site. Once you are done, the browser hides them again.
The main worry is that these passwords stay in the computer memory even if you never visit the websites they belong to. To show how easy it is to see this data, Rønning created a tool called EdgeSavedPasswordsDumper and put it on GitHub.
This tool proves that if a hacker or an infostealer gets control of a computer, they can scan the process memory of the browser to find these saved passwords.
This is a big deal for offices that use terminal servers, Citrix, or Virtual Desktop Infrastructure (VDI), where many people share one machine. In these shared setups, an attacker with administrative rights can perform cross-process memory access to see the data of every user who is logged in and then steal passwords from people who aren't even using the browser at that moment.
When Rønning told Microsoft about this, the company said the setup was by design. The company maintains that they have to balance how fast the browser works with how safe it is. They believe that if a hacker has already gained in-depth access to your computer to scan the memory, the device is already in big trouble.
Because Microsoft doesn't plan to change this soon, some experts suggest changing how you save your details. While Chrome uses better protection to stop other processes from stealing its keys, no browser is perfect. So, it's better to use a separate password app instead of saving them inside your web browser, as this will keep your data away from the browser's memory, where hackers can easily find it.
Experts shared their thoughts with Hackread.com, warning that this design choice creates a massive safety gap. Craig Lurey, from the Chicago-based firm Keeper Security, noted that while Windows tries to keep apps separate, one program can still often "pillage" the memory of another.
He added that since plaintext passwords exist in Edge's memory, other processes can read them "without restriction." To fight this, his firm created Keeper Forcefield, which uses kernel-level protection to block hackers from reading app memory even if the computer is already compromised.
Morey Haber, from the Atlanta-based firm BeyondTrust, also criticised the move. He explained that passwords should be "transient secrets" that are used and then quickly discarded. "The moment a password is retained in clear text memory... it stops being an authentication mechanism and becomes a liability," Haber warned. He added that if a password can be read in memory by a human or a malicious process, "it is already compromised."
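The "transient secret" pattern Haber describes can be sketched roughly as follows. This is illustrative Python, not Edge's or any browser's actual implementation, and a garbage-collected language can only do best-effort wiping; the `decrypt` and `consume` callables are hypothetical stand-ins.

```python
def use_secret(encrypted: bytes, decrypt, consume) -> None:
    """Decrypt a credential only at the moment of use, then wipe the buffer.

    decrypt: callable returning plaintext bytes (e.g. an OS keystore call)
    consume: callable that actually uses the password (e.g. a login routine)
    """
    buf = bytearray(decrypt(encrypted))  # plaintext exists only in this scope
    try:
        consume(bytes(buf))
    finally:
        # Best-effort zeroization so the secret does not linger in memory.
        # (The immutable copy handed to consume() cannot be wiped this way,
        # which is one reason languages with manual memory control are
        # preferred for real credential-handling code.)
        for i in range(len(buf)):
            buf[i] = 0
```

The contrast with the reported Edge behavior is that the plaintext lives only for the duration of one authentication, rather than for the lifetime of the browser process.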
These numbers describe a market that has bifurcated with unusual speed. Just 18 months ago, Nvidia supplied the vast majority of AI training and inference silicon used by Chinese cloud providers. Today, Huawei's Ascend 950PR is the primary procurement target for China's largest tech companies, and a training-focused successor named the 950DT is scheduled for Q4 this year.
The 950PR is currently the only Chinese-made AI processor that supports FP8, a compressed numerical format that allows more operations per second and lowers per-query costs. V4 uses a Mixture-of-Experts architecture with up to 1 trillion total parameters but activates only around 37 billion per inference pass. That favors inference-efficient hardware, which plays to the 950PR's strengths over its limitations in raw training throughput.
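A back-of-envelope sketch of why that architecture favors inference-efficient, FP8-capable hardware (parameter counts are from the article; the bytes-per-parameter figures are the standard format sizes, and the rest is illustrative):

```python
total_params  = 1_000_000_000_000  # ~1 trillion total parameters (V4, per the article)
active_params = 37_000_000_000     # ~37 billion activated per inference pass

# Weight traffic per token is driven by the *active* parameters and the
# numerical precision used to store them.
fp16_gb = active_params * 2 / 1e9  # 2 bytes per FP16 parameter
fp8_gb  = active_params * 1 / 1e9  # 1 byte per FP8 parameter

print(f"Active fraction: {active_params / total_params:.1%}")
print(f"FP16 active weights: {fp16_gb:.0f} GB per pass")
print(f"FP8 active weights:  {fp8_gb:.0f} GB per pass")
```

Only a few percent of the model is touched per pass, and FP8 halves the memory traffic again, which is the kind of workload an inference-oriented chip can serve well even with modest training throughput.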
DeepSeek gave Huawei early optimization access, but didn’t extend the same to Nvidia or AMD. While V4's open weights are released in standard formats compatible with CUDA-based frameworks, DeepSeek's own infrastructure runs on Huawei Ascend silicon. The collaboration has pulled forward procurement timelines across the Chinese cloud industry, and chip prices for the 950PR have reportedly risen by about 20% as a result of the demand.
Meanwhile, SMIC has been working on expanding its advanced-node capacity for more than a year. The goal is a five-fold increase over two years, lifting 7nm and 5nm production to 100,000 wafers per month, and to half a million by 2030. In addition, the combined capacity for 22nm and below could rise from 30,000-50,000 wafer starts per month in 2025 to 50,000-60,000 or higher this year. Huawei is adding two dedicated fabrication plants, though ownership structures remain unclear. Once fully operational, those facilities could exceed the current output of comparable lines at SMIC.
Yields remain a thorn in China’s side, with SMIC’s 7nm-class process delivering substantially fewer good dies per wafer than TSMC’s equivalent nodes, and the 950PR is likely to be a much larger chip than a TSMC equivalent. SMIC’s cycle time from wafer start to finished and packaged as an Ascend processor is also a problem, currently sitting at around eight months, according to estimates from JP Morgan. For similar nodes at TSMC, it’s around three months.
Then there’s HBM. Huawei announced in September that it had developed its own HBM chips, HiBL 1.0 and HiZQ 2.0, with up to 1.6 TB/s of bandwidth, in partnership with CXMT, but how quickly CXMT can ramp production of competitive HBM remains an open question.
The H200, which Nvidia received U.S. licenses to sell to China earlier this year, hasn’t shipped a single unit despite receiving orders. Contradictory regulatory requirements from Washington and Beijing created a stalemate at customs: U.S. regulators require that H200 chips ordered by Chinese customers be used only inside China, while Beijing has instructed domestic technology companies to limit Nvidia hardware to overseas operations.
Nvidia confirmed in its FY2026 10-K filing that it’s "effectively foreclosed from competing in China's data center computing market" and is not assuming any data center compute revenue from the region in its current outlook. Bernstein analysts estimated earlier this year that Nvidia’s share of the China AI GPU market could fall to roughly 8% in the coming years, down from 66% in 2024, both due to U.S. restrictions and because domestic vendors are being pushed to cover up to 80% of demand from domestic sources. TrendForce projected in December that China's high-end AI chip market would grow by more than 60% in 2026, with domestic suppliers capturing about half of the total.
Huawei compensates by linking large numbers of processors via optical interconnects. Its CloudMatrix 384 system combines twelve racks of Ascend modules into a 384-processor fabric delivering roughly 300 PFLOPS, though at nearly four times the power draw of Nvidia's comparable GB200-based configurations.
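Using the article's figures, a rough per-chip comparison can be worked out (illustrative arithmetic only; real utilization varies by workload):

```python
cloudmatrix_pflops = 300   # stated CloudMatrix 384 system throughput
cloudmatrix_chips  = 384   # Ascend processors per fabric

per_chip_tflops = cloudmatrix_pflops / cloudmatrix_chips * 1000
print(f"~{per_chip_tflops:.0f} TFLOPS per Ascend processor")

# The article's ~4x power-draw figure versus a comparable GB200-based system
# implies roughly a quarter of the performance per watt at the system level.
```

This is the trade the scale-out strategy makes explicit: weaker individual chips compensated by count and interconnect, paid for in electricity.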
The 950PR is primarily an inference chip, though; the training-focused 950DT, expected in Q4, is designed for deep learning workloads and could narrow the gap with Nvidia's Hopper generation for model training tasks. Until it ships, Chinese firms that need to train large foundation models domestically face constraints that inference silicon can’t fully solve.
As for Huawei's CANN software ecosystem, it’s now thought to have more than four million developers, but it remains far smaller than Nvidia's CUDA install base. Whether CANN can attract enough third-party development to become self-sustaining remains to be seen. For now, commercial momentum is running in Huawei's favor inside China, driven by the simple absence of alternatives.
[Ed's Comment - From Wikipedia, the free encyclopedia:
The French HADOPI law (French: Haute Autorité pour la Diffusion des Œuvres et la Protection des droits d'auteur sur Internet, English: "Supreme Authority for the Distribution of Works and Protection of Copyright on the Internet") or Creation and Internet law (French: la loi Création et Internet) was introduced during 2009, providing what is known as a graduated response as a means to encourage compliance with copyright laws. HADOPI is the acronym of the government agency created to administer it.
Comment Ends --JR]
Today, the Conseil d’État (the French Administrative Supreme Court) ruled [PDF in French -Ed] in favor of La Quadrature du Net, French Data Network (FDN), Franciliens.net and Fédération FDN [sites in French -Ed]. It recognised that Hadopi's surveillance system (operated by Arcom since 2021) breaches fundamental rights protected by European Union law. As a result, it has ordered the government to repeal the core provisions of the key Hadopi decree that organises the "graduated response" system. This fight against Hadopi, in which La Quadrature has been involved since the first legislative debates in the National Assembly in 2009, is emblematic of the archaic view held by successive governments, both left-wing and right-wing, on the question of sharing online culture and knowledge. It is now up to the government to acknowledge the death of Hadopi and, instead of attempting to bring it back to life, to finally admit that online cultural sharing for non-commercial purposes must not be criminalised.
La Quadrature du Net began challenging the law in court back in 2009, arguing that it was incompatible with European Union law and human rights. The law was named after the French copyright authority (HADOPI) created to administer it.
Previously:
(2026) France Keeps Breaking the Internet to Stop Piracy, Even Though It's Not Working
(2021) France Gets a New Anti-Piracy Agency in 2022
Apple has agreed to pay $250 million to settle a class action lawsuit that accused it of misleading customers about the availability of its Apple Intelligence features. The proposed settlement would apply to people in the US who purchased all models of the iPhone 16 and the iPhone 15 Pro between June 10th, 2024 and March 29th, 2025.
People who submit qualifying claims can receive $25 for each eligible device, "which may decrease or increase up to $95 per device, depending on claim volume and other factors," according to Clarkson Law Firm, the legal team behind the class action lawsuit.
The settlement will resolve a 2025 lawsuit, alleging Apple's advertisements created a "clear and reasonable consumer expectation" that Apple Intelligence features would be available with the launch of the iPhone 16. The lawsuit claimed Apple's products "offered a significantly limited or entirely absent version of Apple Intelligence, misleading consumers about its actual utility and performance."
In a statement to The Verge, Apple spokesperson Marni Goldberg said the company "resolved this matter to stay focused on doing what we do best, delivering the most innovative products and services to our users." You can read Apple's full statement at the bottom of this article.
Apple previewed a series of AI-powered features coming to its iPhones during its June 2024 Worldwide Developers Conference, including a more personalized Siri. But when the iPhone 16 launched in September, Apple labeled it as "built for Apple Intelligence," even though it lacked many of the capabilities teased months earlier.
Instead, Apple gradually rolled out its new AI features, including Image Playground, Genmoji, and a ChatGPT integration in Siri. The company also delayed the launch of its more personalized Siri, which is now expected to arrive later this year.
Last April, the National Advertising Division recommended that Apple "discontinue or modify" its "available now" claim for the Apple Intelligence page on its website. Apple also pulled an iPhone 16 ad showing actor Bella Ramsey using the AI-upgraded Siri.
Apple denied any wrongdoing. Here's the company's full statement:
Since the launch of Apple Intelligence, we have introduced dozens of features across many languages that are integrated across Apple's platforms, relevant to what users do every day, and built with privacy protections at every step. These include Visual Intelligence, Live Translation, Writing Tools, Genmoji, Clean Up and many more.
Apple has reached a settlement to resolve claims related to the availability of two additional features. We resolved this matter to stay focused on doing what we do best, delivering the most innovative products and services to our users.
Ah, nostalgia. The taste of Mum's secret-sauce pasta, the endless summers, that one time Fat Nadya was going to show her boobs in the bushes behind Ms Wolowitz's house ... and soon, dear reader, the indescribable pleasure of wasting time selecting cars, fire hydrants, traffic lights and the like for the fourteenth time just to read or buy something online.
For Google has declared that the Olden Ways are over, as these are agentic times, and it is necessary to let your computer do the routine stuff for you, like booking a month-long cruise in the Caribbean or something. So, no more old captcha: it's ReCaptcha Version II now, and you, yes you, will from now on be obligated to prove you're not (another) machine by taking a picture with your smartphone (machine) which, of course, must be authenticated itself to the Google Machine, to prove you're not a, you guessed it, machine. (Oblig funny monkey clip here [Video not reviewed. -Ed])
Somehow I got the feeling that the only purpose of a human in the not so distant future will be to sign off (minute 21 and beyond) for a machine, and pay its bills.
I guess that's called winning -- by the machines.
https://archive.ph/TCsXg (Actually a NYT article)
There is a moment when internet companies get the stink of death on them. For AOL, it was 2003, when it became clear that its users were abandoning its clunky dial-up internet service for far-faster broadband. For Yahoo, it was 2015, when its last-ditch acquisition spree failed and it sold itself to Verizon.
For Meta, that time is now. I believe the company — one of the most powerful media organizations in the world and one of the most valuable members of the S&P 500 — is at the start of a long, slow decline that will trigger aftershocks to our economy and our society.
It may be named Meta, but the company's biggest asset is still Facebook. Started from a Harvard dorm, the original online social network has dominated our world for two decades. Its three billion users are still bigger than any single country. Its platforms can help sway an election, fuel an insurrection or spark a genocide.
But if you look carefully, you can see chinks in the armor. Meta's earnings are starting to show the strain from years of growing consumer disaffection and reckless spending. The latest earnings, released on April 29, revealed a dip in user numbers for the first time since it started reporting these figures. And the slumping stock confirms what we have all known in our guts for a while: This is a company entering its zombie era.
This directive — first uncovered by Russian independent journalist Maria Kolomychenko, and reported by the Russian version of Radio Free Europe — [site in Russian- Ed] marks a major escalation in the Kremlin's long-running effort to control what its citizens see online and cut them off from the open internet.
The subsidy document allocates roughly 20 billion rubles annually for the operation of ASBI. This figure corroborates a September 2024 report that authorities intended to spend 60 billion rubles (around $650 million) over the next five years to update its internet-blocking system.
A critical detail is that the Russian government hasn’t defined what "92% effectiveness" actually means. Kolomychenko noted it could refer to the number of VPN applications removed from stores, the volume of traffic blocked, or the percentage of people unable to connect.
This marks a fundamental shift in how Russia governs the internet. Rather than chasing down individual services one by one, the state is now pouring money into the underlying network layer to build a permanent filter.
By placing these filters directly in the network path, Roskomnadzor aims to make bypassing blocks a constant uphill battle for users.
Since the invasion of Ukraine, censorship has expanded from specific news outlets to targeting major social media platforms and messaging tools.
Millions of websites have been blocked, and as of 2025, authorities have started cutting off mobile internet across entire regions. They’ve also officially blocked major platforms like WhatsApp and Telegram.
So far, more than 400 VPN services have been banned, with over 1,000 restricted, according to another Russian journalist, Aleksandar Djokic. This, even though it’s still legal to use a VPN in Russia.
Starting April 15, 2026, major Russian service providers are legally required to detect whether a user is connected via a VPN, raising concerns about data privacy and potential future profiling.
At the same time, the Ministry of Digital Development is also pushing a new "foreign traffic tax". It would charge mobile users 150 rubles per gigabyte for any data over a 15GB monthly limit. This fee, which has been facing technical delays, hits the international routes VPNs rely on, making it too costly for most people to bypass the blocks.
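A quick cost sketch using the article's figures shows why the tax would price out heavy VPN use (the exchange rate is an assumption, roughly consistent with the article's 60-billion-ruble / $650 million conversion; the usage figure is hypothetical):

```python
rub_per_gb  = 150      # proposed fee for traffic above the monthly cap
cap_gb      = 15       # included monthly "foreign traffic" allowance
rub_per_usd = 92       # assumed exchange rate (illustrative)

usage_gb = 40          # hypothetical heavy VPN user
over = max(0, usage_gb - cap_gb)
cost_rub = over * rub_per_gb
print(f"{over} GB over the cap -> {cost_rub} rubles (~${cost_rub / rub_per_usd:.0f}/month)")
```

For most Russian users, a recurring charge on that scale would make routine circumvention of the blocks economically unattractive, which appears to be the point.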
Since their introduction in the 1960s, lasers have fueled major advances in science and everyday technology, from supermarket scanners to eye surgery. Traditional lasers operate by controlling photons, which are particles of light. Over the past two decades, researchers have expanded this concept to other particles, including phonons, which represent tiny units of vibration or sound. Learning to control phonons could unlock new capabilities, including access to unusual quantum effects such as entanglement.
A team from the University of Rochester and Rochester Institute of Technology has developed a new squeezed phonon laser that can precisely control vibrations at the nanoscale. This level of control may help scientists better understand gravity, particle acceleration, and the principles of quantum physics. In their study published in Nature Communications, the researchers explain how they guided these small units of mechanical motion to behave in a coordinated, laser-like manner.
Nick Vamivakas, the Marie C. Wilson and Joseph C. Wilson Professor of Optical Physics with the URochester Institute of Optics, previously demonstrated a phonon laser in 2019. In that work, phonons were trapped and levitated using an optical tweezer inside a vacuum. However, turning this concept into a practical tool for precise measurement required addressing a major limitation shared by both photon and phonon lasers: noise. These unwanted fluctuations can interfere with signals and reduce measurement accuracy.
“While a laser looks to the naked eye like a steady beam, there’s actually a lot of fluctuation, which causes noise when you’re using lasers for measurement,” says Vamivakas. “By pushing and pulling on a phonon laser with light in the right way, we can reduce that phonon laser fluctuation significantly.”
The researchers tackled this challenge by using a method known as squeezing to lower the thermal noise within the phonon laser. Reducing this background disturbance makes it possible to take more precise measurements. According to Vamivakas, this improvement allows acceleration to be measured more accurately than with approaches that rely on photon lasers or radio frequency waves.
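In standard quantum-optics notation (a generic sketch of squeezing, not the paper's specific parameters), a mode's two conjugate quadratures $X_1, X_2$ obey an uncertainty bound, and a squeezing parameter $r$ trades noise between them:

```latex
\Delta X_1\,\Delta X_2 \ge \tfrac{1}{4}, \qquad
\Delta X_1 = \tfrac{1}{2}e^{-r}, \quad
\Delta X_2 = \tfrac{1}{2}e^{r}
```

The product still saturates the bound, but a measurement referenced to the squeezed quadrature $X_1$ sees noise below the unsqueezed level of $\tfrac{1}{2}$, which is the sense in which squeezing improves measurement precision.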
With its enhanced sensitivity, the phonon laser could become a valuable tool for measuring gravity and other forces with high precision. This capability may support new navigation technologies. Scientists have proposed quantum compasses as highly accurate, “unjammable” alternatives to GPS navigation that do not depend on satellites. Vamivakas is interested in exploring whether phonon lasers could contribute to the development of such systems.
Journal Reference: Zhang, K., Xiao, K., Bhattacharya, M. et al. A two-mode thermomechanically squeezed phonon laser. Nat Commun 17, 2882 (2026). https://doi.org/10.1038/s41467-026-70564-3
The Trump administration is said to be discussing an executive order that would establish a government review process for new AI models before they're released to the public, The New York Times has reported, citing unnamed U.S. officials.
The proposed order would create an "AI working group" of tech executives and government officials to develop oversight procedures, with White House staff briefing leaders from Anthropic, Google, and OpenAI on the plans last week. If accurate, these reports would represent a sharp departure from the administration's current stance as something of a deregulatory champion; immediately upon taking office, the Trump administration revoked a Biden-era executive order addressing AI risks.
The sudden reversal coincides with a leadership vacuum in White House AI policy. David Sacks, who led the administration's deregulation push as AI czar, left the role in March, with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent having since taken a more active role in shaping AI policy, according to The New York Times.
The new approach sounds a lot like the UK's AI Security Institute model, where government bodies evaluate frontier models against safety benchmarks before and after deployment. Officials told the New York Times that the NSA, the Office of the National Cyber Director, and the Director of National Intelligence could oversee the review. Critically, the system would grant the government early access to models without blocking their release.
Perhaps unsurprisingly, the catalyst for all this appears to have been Anthropic’s Mythos model, which the company’s marketing described as capable of finding thousands of critical software vulnerabilities and too dangerous for public release.
That naturally attracted a lot of unwanted government attention at a time when the Trump administration is already locking horns with Anthropic over the collapsed $200 million Pentagon contract. The Pentagon designated Anthropic a supply chain risk after the company refused to remove guardrails on autonomous weapons and mass surveillance, though a federal judge later called that "Orwellian."
The NSA has already used Mythos to assess vulnerabilities in government Microsoft software deployments, even as other agencies remain cut off from Anthropic's tools. Some analysts have questioned whether Mythos's capabilities justify Anthropic's dramatic framing, with some studies finding that cheaper models can achieve comparable results in vulnerability discovery.
A White House official told The New York Times that talk of an executive order is "speculation," and that any announcement would come from Trump himself. Dean Ball, a former senior adviser on AI in the Trump administration, told the newspaper that officials are trying to avoid overregulation while keeping pace with the technology, calling it a “tricky balance.”
Daemon Tools users: It's time to check your machines for stealthy infections, stat:
Daemon Tools, a widely used app for mounting disk images, has been backdoored in a monthlong compromise that has pushed malicious updates from the servers of its developer, researchers said Tuesday.
Kaspersky, the security firm reporting the supply-chain attack, said it began on April 8 and remained active as of the time its post went live. Installers signed with the developer's official digital certificate and downloaded from its website infect Daemon Tools executables, causing the malware to run at boot time. Kaspersky didn't explicitly say so, but based on technical details, the infected versions appear to be only those that run on Windows. Versions 12.5.0.2421 through 12.5.0.2434 are affected. Neither Kaspersky nor developer AVB could immediately be reached for additional details.
Infected versions contain an initial payload that collects MAC addresses, hostnames, DNS domain names, running processes, installed software, and system locales, then sends the data to an attacker-controlled server. Thousands of machines in more than 100 countries were targeted. Of the many machines infected, about 12, belonging to retail, scientific, government, and manufacturing organizations, have received a follow-on payload—an indication that the supply-chain attack targets select groups.
[...] One of the follow-on payloads pushed to about a dozen organizations was what Kaspersky described as a "minimalistic backdoor." It has the ability to execute commands, download files, and run shellcode payloads in memory—making the infection harder to detect.
Kaspersky said that it observed a more complex backdoor dubbed QUIC RAT, installed on a single machine belonging to an educational institution located in Russia. Initial analysis found that it can inject payloads into the notepad.exe and conhost.exe processes and supports a variety of C2 communication protocols, including HTTP, UDP, TCP, WSS, QUIC, DNS, and HTTP/3.
The 100 infected organizations were primarily located in Russia, Brazil, Turkey, Spain, Germany, France, Italy, and China. Kaspersky's visibility into the attack is limited because it's based solely on telemetry provided by its own products.
[...] More recent supply-chain attacks have hit Trivy, Checkmarx, and Bitwarden and more than 150 packages available through open source repositories. Last year, there were at least six notable such attacks.
Anyone who uses Daemon Tools should take time to scan the entirety of their machines using reputable antivirus software. Windows users should additionally check for indicators of compromise listed in the Kaspersky post. For more technically advanced users, Kaspersky recommends monitoring "suspicious code injections into legitimate system processes, especially when the source is executables launched from publicly accessible directories such as Temp, AppData, or Public."
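As a first triage step before a full antivirus scan, users can compare their installed version against the compromised range Kaspersky reported. The sketch below is a minimal, hypothetical check: the affected range (12.5.0.2421 through 12.5.0.2434) comes from the article, but how you obtain your local version string (the app's About dialog, the installer filename, etc.) is left to you.

```python
# Hedged sketch: test whether a Daemon Tools version string falls inside the
# compromised release range reported by Kaspersky. The range below is taken
# from the article; obtaining your actual local version string is up to you
# (e.g. the application's About dialog).

AFFECTED_LOW = (12, 5, 0, 2421)
AFFECTED_HIGH = (12, 5, 0, 2434)

def parse_version(version_string):
    """Turn a dotted version string like '12.5.0.2430' into a tuple of ints."""
    return tuple(int(part) for part in version_string.split("."))

def is_affected(version_string):
    """Return True if the version lies within the compromised release range."""
    return AFFECTED_LOW <= parse_version(version_string) <= AFFECTED_HIGH

# Boundary checks against the reported range:
print(is_affected("12.5.0.2420"))  # False: predates the compromise
print(is_affected("12.5.0.2430"))  # True: inside the affected range
```

Tuple comparison makes the range test robust: Python compares version components numerically field by field, so "12.5.0.2430" sorts correctly between the low and high bounds. A version outside the range is not proof of a clean machine, so the full scan Kaspersky recommends still applies.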
The disbelief was palpable when Mozilla's CTO last month declared that AI-assisted vulnerability detection meant "zero-days are numbered" and "defenders finally have a chance to win, decisively."
[...]
Mindful of the skepticism, Mozilla on Thursday provided a behind-the-scenes look into its use of Anthropic Mythos—an AI model for identifying software vulnerabilities—to ferret out 271 Firefox security flaws over two months. In a post, Mozilla engineers said the breakthrough they achieved, now finally ready for prime time, was primarily the result of two things: (1) improvement in the models themselves and (2) Mozilla's development of a custom "harness" that supported Mythos as it analyzed Firefox source code.
[...]
The biggest differentiating factor was the use of an agent harness, a piece of code that wraps around an LLM to guide it through a series of specific tasks. For such a harness to be useful, it requires significant resources to customize it to the project-specific semantics, tooling, and processes it will serve. Grinstead described the harness his team built as "the code that drives the LLM in order to accomplish a goal. It gives the model instructions (e.g., 'find a bug in this file'), provides it tools (e.g., allowing it to read/write files and evaluate test cases), then runs it in a loop until completion."
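The loop Grinstead describes can be sketched in a few lines. The following is an illustrative toy, not Mozilla's actual harness: the model is replaced with a stub function, and the single `read_file` tool is a placeholder, but the control flow (instruction, tool call, observation, repeat until done) is the shape he outlines.

```python
# Toy sketch of an agent harness: give the model a goal, expose tools, and
# loop until it signals completion. "stub_model" stands in for a real LLM API
# call; all names here are hypothetical, not Mozilla's actual implementation.

def stub_model(goal, observations):
    """Stand-in for an LLM: issues a tool call first, then reports a finding."""
    if not observations:
        return ("read_file", "parser.c")   # first step: ask to see the code
    return ("done", "possible out-of-bounds read in parser.c")

TOOLS = {
    "read_file": lambda name: f"<contents of {name}>",  # placeholder tool
}

def run_harness(goal, model, max_steps=10):
    """Drive the model in a loop: tool call -> observation -> next call."""
    observations = []
    for _ in range(max_steps):
        action, argument = model(goal, observations)
        if action == "done":
            return argument                # model reports its finding
        observations.append(TOOLS[action](argument))
    return None                            # give up after max_steps

print(run_harness("find a bug in parser.c", stub_model))
# prints: possible out-of-bounds read in parser.c
```

A production harness differs mainly in scale: the tool set covers file writes and running test cases, the "model" is a real API call, and the loop carries the full conversation history, but the wrapper-plus-loop structure is the same.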
[...]
Thursday's behind-the-scenes view includes the unhiding of full Bugzilla reports for 12 of the 271 vulnerabilities Mozilla discovered using Mythos and, to a lesser extent, Claude Opus 4.6.
[...]
At least one researcher said Thursday that a cursory look at the reports showed they were "pretty impressive."
[...]
The critics are right to keep pushing back. Hype is a key method for pumping up the already lofty valuations of AI companies. Given the extensive praise Mozilla has heaped on Mythos, it's easy for even trusting observers to wonder: What is it getting in return? Far from settling the debate, Thursday's elaborations are likely to only stoke the controversy further.
As Americans struggle with the price of gas and groceries, Starbucks CEO Brian Niccol made the case for a $9 cup of coffee while speaking with The Wall Street Journal's What's News AM podcast.
"We're doing really well with Gen Z and millennials, and then really had strong performance across all income cohorts," Niccol said. "It can start with as little as $3 for a traditional cup of coffee. And then obviously you can build your way into all sorts of customized drinks that people love that move that ticket up."
Podcast host Luke Vargas asked, "You mentioned sort of strength across income cohorts. We've heard so much this week about the K-shaped economy. Fortunes for some Americans, very different than for others. Is that not really something that's coming up in your sales?"
"You know, we're not seeing that in our business," Niccol said. "What we're seeing is people, you know, they want to have a special experience, and regardless of what your income level is. In some cases, you know, a $9 experience does feel like you're splurging. And then, what that means is we have to make it worthwhile, right?"
"And then in other cases, certain people believe, 'Well, this is a really affordable premium experience.' Because they're saying like, 'Well it's less than $10 and I get a really premium experience,'" Niccol said. "So, regardless of where you're stationed in those income cohorts, we want to make that experience worth your while. And what we know is what's definitely something that drives that value is to be able to have a great seat, have a great moment of connection with a barista."
"We just saw on Friday, I'm sure you've seen the US consumer confidence reading, perceptions of the economy are worse than they've been since the '70s, since '08, since the pandemic," Vargas said. "These are some pretty bad reference points here. Just how do you market to that consumer?"
"Yeah. Look, when we've spent the time talking to customers, 'What is it that you're looking for in your experience?' They do talk about how they use their Starbucks experience as a moment of escapism. And my hope is we get more than our fair share of all those occasions," replied Niccol.
"Part of that is you're not playing the value game," Vargas suggested.
"Well, I think we're just playing it in a different way, which is the way we're going to play the value game is you're going to feel like it was worth it," said Niccol. "And it's not going to be a game of discounting or one-off promotions. I think people actually really do appreciate knowing, 'Hey, if this is a $3 cup of coffee or a $5 latte, I know I'm going to get a great experience for that $5 experience, I'm in.'"